Mimetic behaviour and institutional persistence: a two-armed bandit experiment∗
Authors
Abstract
Institutions are the result of many individual decisions. Understanding how agents filter the available information about the behaviour of others is therefore crucial. In this paper we investigate whether and how agents' self-efficacy beliefs affect mimetic behaviour and thus, implicitly, the evolution of institutions. We propose an experimental task, a modified version of the two-armed bandit with a finite time horizon. In the first treatment, we study individual learning in detail. In the second treatment, we measure how individuals use the information they gather while observing a randomly selected group leader. We find a negative relation between self-efficacy beliefs and the propensity to emulate a peer. This might ultimately affect the likelihood of institutional change.
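As a rough illustration of the kind of task described above, the sketch below (Python) simulates a finite-horizon two-armed bandit played by an epsilon-greedy learner. The payoff probabilities, horizon, and learning rule are illustrative assumptions; the paper's actual experimental design is not specified in the abstract.

# Minimal sketch of a finite-horizon two-armed bandit task with an
# epsilon-greedy learner. The payoff probabilities, horizon, and learning
# rule below are illustrative assumptions, not the paper's actual design.
import random

def run_bandit(p_arms=(0.4, 0.6), horizon=50, epsilon=0.1, seed=None):
    rng = random.Random(seed)
    pulls = [0, 0]          # how often each arm was chosen
    successes = [0, 0]      # observed successes per arm
    total_reward = 0

    for _ in range(horizon):
        # Explore with probability epsilon (or while an arm is untried),
        # otherwise exploit the arm with the higher empirical success rate.
        if rng.random() < epsilon or pulls[0] == 0 or pulls[1] == 0:
            arm = rng.randrange(2)
        else:
            rates = [successes[i] / pulls[i] for i in range(2)]
            arm = max(range(2), key=lambda i: rates[i])

        reward = 1 if rng.random() < p_arms[arm] else 0
        pulls[arm] += 1
        successes[arm] += reward
        total_reward += reward

    return total_reward, pulls

if __name__ == "__main__":
    reward, pulls = run_bandit(seed=42)
    print(f"total reward: {reward}, pulls per arm: {pulls}")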
Similar resources
How fast is the bandit? (Damien Lamberton, Oct 2005)
In this paper we investigate the rate of convergence of the so-called two-armed bandit algorithm in a financial context of asset allocation. The behaviour of the algorithm turns out to be highly non-standard: no CLT whatever the time scale, and the possible existence of two rate regimes.
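For readers unfamiliar with it, the two-armed bandit algorithm in this literature is a stochastic-approximation (linear reward-inaction) update of the probability of playing one arm. The sketch below assumes Bernoulli payoffs and a 1/n-type step sequence; both are illustrative choices, not the exact setting studied in the paper.

# Sketch of a linear reward-inaction ("two-armed bandit") update.
# Success probabilities and the step sequence gamma_n = c / (n + 1)
# are illustrative assumptions.
import random

def bandit_algorithm(p_a=0.6, p_b=0.5, steps=10_000, c=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.5  # current probability of playing arm A
    for n in range(1, steps + 1):
        gamma = c / (n + 1)
        if rng.random() < x:                 # play arm A
            if rng.random() < p_a:           # A succeeds: reinforce A
                x += gamma * (1.0 - x)
        else:                                # play arm B
            if rng.random() < p_b:           # B succeeds: reinforce B
                x -= gamma * x
    return x

if __name__ == "__main__":
    print(f"P(play A) after learning: {bandit_algorithm():.3f}")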
Cognitive Capacity and Choice under Uncertainty: Human Experiments of Two-armed Bandit Problems
The two-armed bandit problem, or more generally the multi-armed bandit problem, has been identified as the problem underlying many practical situations that involve making a series of choices among uncertain alternatives. Problems like job searching, customer switching, and even the adoption of fundamental or technical trading strategies of traders in financial markets can be formulate...
Complexity Constraints in Two-Armed Bandit Problems: An Example
This paper derives the optimal strategy for a two-armed bandit problem under the constraint that the strategy must be implemented by a finite automaton with an exogenously given, small number of states. The idea is to find learning rules for bandit problems that are optimal subject to the constraint that they must be simple. Our main results show that the optimal rule involves an arbitrary init...
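To make the idea of a learning rule with a small number of states concrete, the sketch below implements a win-stay, lose-shift automaton whose only memory is the current arm and the last payoff. It is an illustrative rule, not the optimal automaton derived in the paper.

# Illustrative finite-automaton play for a two-armed bandit: a win-stay /
# lose-shift rule that only remembers the current arm and the last payoff
# (a handful of states). This is not the paper's optimal automaton, just a
# sketch of what "a learning rule with a small number of states" means.
import random

def win_stay_lose_shift(p_arms=(0.4, 0.6), horizon=100, seed=1):
    rng = random.Random(seed)
    arm = rng.randrange(2)   # state component 1: which arm we are on
    total = 0
    for _ in range(horizon):
        reward = 1 if rng.random() < p_arms[arm] else 0
        total += reward
        if reward == 0:      # state transition: shift arms after a failure
            arm = 1 - arm
    return total

if __name__ == "__main__":
    print("reward over 100 rounds:", win_stay_lose_shift())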
Max K-Armed Bandit: On the ExtremeHunter Algorithm and Beyond
This paper is devoted to the study of the max K-armed bandit problem, which consists in sequentially allocating resources in order to detect extreme values. Our contribution is twofold. We first significantly refine the analysis of the ExtremeHunter algorithm carried out in Carpentier and Valko (2014), and next propose an alternative approach, showing that, remarkably, Extreme Bandits can be re...
Multi-armed bandit experiments in the online service economy
The modern service economy is substantively different from the agricultural and manufacturing economies that preceded it. In particular, the cost of experimenting is dominated by opportunity cost rather than the cost of obtaining experimental units. The different economics require a new class of experiments, in which stochastic models play an important role. This article briefly summarizes muli...
Journal title:
Volume, Issue:
Pages: -
Publication date: 2016